
OCNE 2.X: "ocne cluster console" Command To Connect to OCNE OCK Node

The command below can be used to connect to an OCNE 2.x OCK instance from the workstation CLI node.

ocne cluster console --node <node> --direct
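
For example, with a hypothetical OCK node named ocne-worker-1 (replace it with the actual node name from your cluster):

ocne cluster console --node ocne-worker-1 --direct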



Keywords

connect connecting ssh sshing nodes console consoles how to howto doc docs


OCNE: Podman/Crictl Pull Commands To Pull OCK Image

Below are the crictl and podman pull commands to pull the OCK image for OCNE 2.x. Change the OCK image version in the commands below as needed.

crictl pull container-registry.oracle.com/olcne/ock:1.32

podman pull container-registry.oracle.com/olcne/ock:1.32
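
For example, to pull a different OCK version, change only the image tag (the tag below is illustrative; verify the tag exists in the registry first):

podman pull container-registry.oracle.com/olcne/ock:1.31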



Keywords:

OCNE Oracle Cloud Native Environment olcne CNE OCK Container Engine Kubernetes command commands ocne pulling 

OCNE: Install Oracle Cloud Native Environment (OCNE) 1.9 Non HA on OCI (Oracle Cloud Infrastructure)

Below are the steps to install Oracle Cloud Native Environment (OCNE) 1.9 Non HA on OCI (Oracle Cloud Infrastructure).

1) Provision 3 OL8 instances from the OCI Cloud portal - 1 for the Operator node, 1 for the Control node and 1 for the Worker node. You can have more worker nodes as well if you would like. The latest OL8 instances come with the UEK7 kernel. The default OCI user is opc.

2) For the opc user, enable passwordless SSH from the Operator node to the Control & Worker nodes and to itself.

To do this, run the steps below.

Generate the public key on the Operator node by running the command below.
# ssh-keygen -t rsa 
The above command generates the /home/opc/.ssh/id_rsa.pub file, which is the public key file.

Copy the content of the /home/opc/.ssh/id_rsa.pub key on the Operator node and append it to the end of the /home/opc/.ssh/authorized_keys file on the Operator, Control and Worker nodes. A sketch of one way to do this is shown below.
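
Below is a minimal sketch, assuming the hostnames rhck-ctrl and rhck-wrkr used later in this post and assuming the OCI-provisioned key already lets you reach those nodes:

# On the Operator node, append its own public key to itself:
cat /home/opc/.ssh/id_rsa.pub >> /home/opc/.ssh/authorized_keys
chmod 600 /home/opc/.ssh/authorized_keys

# Copy and append the same public key to the Control and Worker nodes:
ssh-copy-id -i /home/opc/.ssh/id_rsa.pub opc@rhck-ctrl
ssh-copy-id -i /home/opc/.ssh/id_rsa.pub opc@rhck-wrkr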

3) Verify that passwordless SSH works from the Operator node to itself and to the Control and Worker nodes using the ssh command, for example as shown below.
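
With the hypothetical hostnames used later in this post:

ssh opc@rhck-opr hostname
ssh opc@rhck-ctrl hostname
ssh opc@rhck-wrkr hostname

Each command should print the remote hostname without prompting for a password.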

4) On the Operator, Control and all Worker nodes, install the oracle-olcne-release-el8 package.

#sudo dnf -y install oracle-olcne-release-el8

sudo dnf -y install oracle-olcne-release-el8
Last metadata expiration check: 1:27:12 ago on Wed 26 Feb 2025 03:41:24 AM GMT.
Dependencies resolved.
===========================================================================================
 Package                       Architecture Version          Repository               Size
===========================================================================================
Installing:
 oracle-ocne-release-el8       x86_64       1.0-12.el8       ol8_baseos_latest        16 k

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 16 k
Installed size: 20 k
Downloading Packages:
oracle-ocne-release-el8-1.0-12.el8.x86_64.rpm              214 kB/s |  16 kB     00:00    
-------------------------------------------------------------------------------------------
Total                                                      209 kB/s |  16 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                   1/1 
  Installing       : oracle-ocne-release-el8-1.0-12.el8.x86_64                         1/1 
  Running scriptlet: oracle-ocne-release-el8-1.0-12.el8.x86_64                         1/1 
  Verifying        : oracle-ocne-release-el8-1.0-12.el8.x86_64                         1/1 

Installed:
  oracle-ocne-release-el8-1.0-12.el8.x86_64                                                

Complete!

5) On the Operator, Control and Worker nodes, back up the /etc/yum.repos.d/oracle-ocne-ol8.repo file and then update it to change the OL8 developer repo name from ol8_developer_olcne to ol8_developer. To do this, run the command below.
# sudo sed -i 's/ol8_developer_olcne/ol8_developer/g' /etc/yum.repos.d/oracle-ocne-ol8.repo

6) On the Operator, Control, and Worker nodes, enable the OLCNE 1.9 and other OL8 & kernel yum repositories.

# sudo dnf config-manager --enable ol8_olcne19 ol8_addons ol8_baseos_latest ol8_appstream ol8_kvm_appstream ol8_UEKR7


7) On the Operator, Control, and Worker nodes, disable the old OCNE repos.

sudo dnf config-manager --disable ol8_olcne18 ol8_olcne17 ol8_olcne16 ol8_olcne15 ol8_olcne14 ol8_olcne13 ol8_olcne12 ol8_UEKR6


8) On the Operator, Control, and Worker nodes, verify that the OCNE 1.9 repo and the other repos enabled in step 6 above are enabled.

#sudo dnf repolist enabled

sudo dnf repolist enabled
Repository ol8_developer is listed more than once in the configuration
repo id                        repo name
ol8_MySQL84                    MySQL 8.4 Server Community for Oracle Linux 8 (x86_64)
ol8_MySQL84_tools_community    MySQL 8.4 Tools Community for Oracle Linux 8 (x86_64)
ol8_MySQL_connectors_community MySQL Connectors Community for Oracle Linux 8 (x86_64)
ol8_UEKR7                      Latest Unbreakable Enterprise Kernel Release 7 for Oracle Linux 8 (x86_64)
ol8_addons                     Oracle Linux 8 Addons (x86_64)
ol8_appstream                  Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest              Oracle Linux 8 BaseOS Latest (x86_64)
ol8_ksplice                    Ksplice for Oracle Linux 8 (x86_64)
ol8_kvm_appstream              Oracle Linux 8 KVM Application Stream (x86_64)
ol8_oci_included               Oracle Software for OCI users on Oracle Linux 8 (x86_64)
ol8_olcne19                    Oracle Cloud Native Environment version 1.9 (x86_64)


9) On the Operator node, install the olcnectl software package:

# sudo dnf -y install olcnectl

sudo dnf -y install olcnectl
Repository ol8_developer is listed more than once in the configuration
Oracle Linux 8 BaseOS Latest (x86_64)                      215 kB/s | 4.3 kB     00:00    
Oracle Linux 8 Application Stream (x86_64)                 379 kB/s | 4.5 kB     00:00    
Oracle Linux 8 Addons (x86_64)                             286 kB/s | 3.5 kB     00:00    
Oracle Cloud Native Environment version 1.9 (x86_64)       736 kB/s |  89 kB     00:00    
Latest Unbreakable Enterprise Kernel Release 7 for Oracle  269 kB/s | 3.5 kB     00:00    
Oracle Linux 8 KVM Application Stream (x86_64)             8.4 MB/s | 1.6 MB     00:00    
Dependencies resolved.
===========================================================================================
 Package             Architecture      Version                Repository              Size
===========================================================================================
Installing:
 olcnectl            x86_64            1.9.2-3.el8            ol8_olcne19            4.8 M

Transaction Summary
===========================================================================================
Install  1 Package

Total download size: 4.8 M
Installed size: 15 M
Downloading Packages:
olcnectl-1.9.2-3.el8.x86_64.rpm                             21 MB/s | 4.8 MB     00:00    
-------------------------------------------------------------------------------------------
Total                                                       20 MB/s | 4.8 MB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                   1/1 
  Installing       : olcnectl-1.9.2-3.el8.x86_64                                       1/1 
  Verifying        : olcnectl-1.9.2-3.el8.x86_64                                       1/1 

Installed:
  olcnectl-1.9.2-3.el8.x86_64                                                              

Complete!
[opc@rhck-opr yum.repos.d]$ 



10) Run the olcnectl provision command to create the OCNE Kubernetes environment.

In the command below, replace the value of the --api-server flag with the Operator node, --control-plane-nodes with the Control node(s), and --worker-nodes with the Worker node(s). For --environment-name give the desired OCNE environment name, and for --name give the Kubernetes cluster name of your choice.

olcnectl provision \
--api-server rhck-opr \
--control-plane-nodes rhck-ctrl \
--worker-nodes rhck-wrkr \
--environment-name cne-rhck-env \
--name cne-rhck-nonha-cluster \
--yes

Below is the console output of a successful run of the provision command, for reference.

#olcnectl provision \
> --api-server rhck-opr \
> --control-plane-nodes rhck-ctrl \
> --worker-nodes rhck-wrkr \
> --environment-name cne-rhck-env \
> --name cne-rhck-nonha-cluster \
> --yes
INFO[26/02/25 05:34:51] Generating certificate authority             
INFO[26/02/25 05:34:51] Generating certificate for rhck-opr          
INFO[26/02/25 05:34:51] Generating certificate for rhck-ctrl         
INFO[26/02/25 05:34:52] Generating certificate for rhck-wrkr         
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-opr 
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-opr 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-opr 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-opr/node.key" to "/etc/olcne/certificates/node.key" on rhck-opr 
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-ctrl 
INFO[26/02/25 05:34:52] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-ctrl 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-ctrl 
INFO[26/02/25 05:34:52] Copying local file at "certificates/rhck-ctrl/node.key" to "/etc/olcne/certificates/node.key" on rhck-ctrl 
INFO[26/02/25 05:34:52] Creating directory "/etc/olcne/certificates/" on rhck-wrkr 
INFO[26/02/25 05:34:53] Copying local file at "certificates/ca/ca.cert" to "/etc/olcne/certificates/ca.cert" on rhck-wrkr 
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.cert" to "/etc/olcne/certificates/node.cert" on rhck-wrkr 
INFO[26/02/25 05:34:53] Copying local file at "certificates/rhck-wrkr/node.key" to "/etc/olcne/certificates/node.key" on rhck-wrkr 
INFO[26/02/25 05:34:53] Apply api-server configuration on rhck-opr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Install API Server
    Add firewall port 8091/tcp
 
INFO[26/02/25 05:34:53] Apply control-plane configuration on rhck-ctrl:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
    Add interface cni0 to trusted zone
    Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp 6443/tcp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
 
INFO[26/02/25 05:34:53] Apply worker configuration on rhck-wrkr:
* Install oracle-olcne-release
* Enable olcne19 repo
* Configure firewall rule:
    Add interface cni0 to trusted zone
    Add ports: 8090/tcp 10250/tcp 10255/tcp 9100/tcp 8472/udp
* Disable swap
* Load br_netfilter module
* Load Bridge Tunable Parameters:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
* Set SELinux to permissive
* Install and enable olcne-agent
 
Environment cne-rhck-env created.
Modules created successfully.
Modules installed successfully.
INFO[26/02/25 05:49:19] Kubeconfig for instance "cne-rhck-nonha-cluster" in environment "cne-rhck-env" written to kubeconfig.cne-rhck-env.cne-rhck-nonha-cluster


11) Update the OCNE config so that olcnectl commands can be run without the --api-server argument. To do this, run the command below on the Operator node.

In the command below, replace the --api-server node name with the Operator node name and --environment-name with the OCNE environment name given in the provision command in step 10 above.

olcnectl module instances \
--api-server rhck-opr:8091 \
--environment-name cne-rhck-env \
--update-config

Now rerun the olcnectl module instances command without the --api-server argument, as follows. This command lists the Control & Worker nodes and the Kubernetes cluster name.
olcnectl module instances --environment-name cne-rhck-env

#olcnectl module instances --environment-name cne-rhck-env
INSTANCE               MODULE     STATE    
rhck-ctrl:8090         node       installed
rhck-wrkr:8090         node       installed
cne-rhck-nonha-cluster kubernetes installed


12) Set up the kubectl environment on the Control node to run kubectl commands for Kubernetes operations. To do this, run the commands below on the Control node.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc

13) Validate that kubectl works on the Control node and that the Kubernetes nodes are Ready and the pods are in Running state. To do this, run the kubectl commands below.
kubectl get nodes
kubectl get pods -A

Below are sample outputs for reference.

# kubectl get nodes

NAME        STATUS   ROLES           AGE   VERSION
rhck-ctrl   Ready    control-plane   11m   v1.29.9+3.el8
rhck-wrkr   Ready    <none>          10m   v1.29.9+3.el8

# kubectl get pods -A

NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE
kube-system            coredns-5859f68d4-2z6vq                       1/1     Running   0          11m
kube-system            coredns-5859f68d4-lqxxk                       1/1     Running   0          11m
kube-system            etcd-rhck-ctrl                                1/1     Running   0          11m
kube-system            kube-apiserver-rhck-ctrl                      1/1     Running   0          11m
kube-system            kube-controller-manager-rhck-ctrl             1/1     Running   0          11m
kube-system            kube-flannel-ds-gz548                         1/1     Running   0          8m49s
kube-system            kube-flannel-ds-rmpdt                         1/1     Running   0          8m49s
kube-system            kube-proxy-ffnzs                              1/1     Running   0          10m
kube-system            kube-proxy-n7kxf                              1/1     Running   0          11m
kube-system            kube-scheduler-rhck-ctrl                      1/1     Running   0          11m
kubernetes-dashboard   kubernetes-dashboard-547d4b479c-fnjtf         1/1     Running   0          8m48s
ocne-modules           verrazzano-module-operator-6cb74478bf-xv8z2   1/1     Running   0          8m48s

Now you have an installed OCNE Kubernetes environment ready to go.

- - -
Keywords added for search:

OCNE installation

OCNE: Oracle Cloud Native Environment: Information About OCNE Kubernetes Cluster Backups

OCNE backups can be taken using the command below.

olcnectl module backup \
--environment-name myenvironment \
--name mycluster


The documentation below has more information on this.


https://docs.oracle.com/en/operating-systems/olcne/1.6/orchestration/backup-restore.html#backup


The OCNE backups are located under /var/olcne/backups/environment-name/kubernetes/module-name/timestamp.
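
For example, reusing the hypothetical environment and cluster names from the 1.9 install steps earlier on this page (and assuming the kubernetes module instance name matches the cluster name given with --name):

olcnectl module backup \
--environment-name cne-rhck-env \
--name cne-rhck-nonha-cluster

ls /var/olcne/backups/cne-rhck-env/kubernetes/cne-rhck-nonha-cluster/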


OCNE: Oracle Cloud Native Environment: Scaling Up and Adding Kubernetes Node Using Olcnectl Module Update Command

Below is the procedure.

1. List the current OCNE nodes using the olcnectl command on the Operator node as follows.

In the command below, change the environment name as needed.

olcnectl module instances --environment-name cne1-ha-env


olcnectl module instances --environment-name cne1-ha-env
INFO[13/10/23 03:07:56] Starting local API server                    
INFO[13/10/23 03:07:57] Starting local API server                    
INSTANCE              MODULE     STATE    
cne1-ha-control1:8090 node       installed
cne1-ha-helm          helm       created  
cne1-ha-istio         istio      installed
cne1-ha-worker1:8090  node       installed
cne1-ha-worker2:8090  node       installed
grafana               grafana    installed
prometheus            prometheus installed
cne1-ha-cluster       kubernetes installed


In the sample output above, there is just one control node.


2. To add the second control node, run the olcnectl module update command as follows. Note that there is no need to provide the load-balancer or virtual IP flags to the olcnectl command when doing a module update.


olcnectl module update \
--environment-name cne1-ha-env \
--name cne1-ha-cluster \
--control-plane-nodes cne1-ha-control1:8090,cne1-ha-control2:8090 \
--worker-nodes cne1-ha-worker1:8090,cne1-ha-worker2:8090


$ olcnectl module update \
> --environment-name cne1-ha-env \
> --name cne1-ha-cluster \
> --control-plane-nodes cne1-ha-control1:8090,cne1-ha-control2:8090 \
> --worker-nodes cne1-ha-worker1:8090,cne1-ha-worker2:8090
INFO[13/10/23 03:09:19] Starting local API server                    
? [WARNING] Update will shift your workload and some pods will lose data if they rely on local storage. Do you want to continue? Yes
Taking backup of modules before update
Backup of modules succeeded.
Updating modules
Update successful


3. Now run the olcnectl module instances command again to verify that the newly added node is listed.

In the command below, change the environment name as needed.


olcnectl module instances --environment-name cne1-ha-env



[opc@cne1-ha-operator ~]$ olcnectl module instances --environment-name cne1-ha-env
INFO[13/10/23 03:13:37] Starting local API server                    
INFO[13/10/23 03:13:38] Starting local API server                    
INSTANCE              MODULE     STATE    
cne1-ha-worker1:8090  node       installed
cne1-ha-worker2:8090  node       installed
cne1-ha-cluster       kubernetes installed
cne1-ha-control1:8090 node       installed
cne1-ha-istio         istio      installed
cne1-ha-helm          helm       created  
grafana               grafana    installed
prometheus            prometheus installed
cne1-ha-control2:8090 node       installed
[opc@cne1-ha-operator ~]$ 


4. Now check whether the kube-system pods are running on the newly added node, using the command below on Control node 1.


kubectl get pods -n kube-system


[opc@cne1-ha-operator ~]$ kubectl get pods -n kube-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[opc@cne1-ha-operator ~]$ 
[opc@cne1-ha-operator ~]$ ssh cne1-ha-control1
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Mon Oct  2 18:58:46 2023 from 10.0.1.247
[opc@cne1-ha-control1 ~]$ 
[opc@cne1-ha-control1 ~]$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS       AGE
coredns-6fdffc7bfc-tzr9p                   1/1     Running   10             113d
coredns-6fdffc7bfc-wrws4                   1/1     Running   2              10d
etcd-cne1-ha-control1                      1/1     Running   15             195d
etcd-cne1-ha-control2                      1/1     Running   15             2d12h
kube-apiserver-cne1-ha-control1            1/1     Running   20             195d
kube-apiserver-cne1-ha-control2            1/1     Running   21             2d12h
kube-controller-manager-cne1-ha-control1   1/1     Running   21             195d
kube-controller-manager-cne1-ha-control2   1/1     Running   15             2d12h
kube-flannel-ds-8ttpr                      1/1     Running   21             140d
kube-flannel-ds-r7pgc                      1/1     Running   20 (15m ago)   140d
kube-flannel-ds-rvc7l                      1/1     Running   20             140d
kube-flannel-ds-wlqzf                      1/1     Running   3 (15m ago)    2d12h
kube-proxy-9hdzl                           1/1     Running   15             195d
kube-proxy-hqx5n                           1/1     Running   15             195d
kube-proxy-t5ckp                           1/1     Running   15             195d
kube-proxy-xz9t5                           1/1     Running   1              2d12h
kube-scheduler-cne1-ha-control1            1/1     Running   18             195d
kube-scheduler-cne1-ha-control2            1/1     Running   16             2d12h
metrics-server-77dfc8475-qskxw             1/1     Running   13             134d



Oracle Cloud Native Environment (OCNE): OLCNECTL Command To List Kubernetes Node Instances

The command below can be used.

olcnectl module instances --environment-name=<env name>

Below is sample output.

INFO[16/11/22 16:35:26] Starting local API server                    

INSTANCE                   MODULE          STATE    
control.Test.com:8090      node            installed
ocne-cluster               kubernetes      installed
worker1.Test.com:8090      node            installed
worker2.Test.com:8090      node            installed